perm filename PAPER[4,KMC] blob
sn#049377 filedate 1973-06-14 generic text, type C, neo UTF8
.DEVICE XGP
.!XGPLFTMAR ← 4 ;
.!XGPCOMMANDS ← "/PMAR=2400/XLINE=9" ;
.FONT 1 "NGR30" ;
.FONT B "CLAR30" ;
.FONT F "FIX25" ;
.FONT 2 "SUP" ;
.TURN ON "α#{%" ;
.PAGE FRAME 60 HIGH 57 WIDE
.AREA TEXT LINES 1 TO 60 CHARS 1 TO 57
.NEXT PAGE
.MACRO SEC(NAME) ⊂IF LINES < 8*SPREAD THEN NEXT PAGE; SKIP SPREAD ;
.ONCE CENTER
%B{}NAME%*
.BREAK⊃
.MACRO SS(NAME) ⊂IF LINES < 5*SPREAD THEN NEXT PAGE; SKIP SPREAD ;
.ONCE FLUSH LEFT
%B{}NAME %*
.ONCE PREFACE 1⊃
.MACRO B ⊂ SKIP 1 ; BEGIN SELECT F ;
.GROUP ; INDENT 6 ; NOFILL ; SINGLE SPACE ;⊃ ;
.MACRO E ⊂END⊃
.MACRO EC ⊂ SKIP 1; END CONTINUE ⊃
.MACRO EB ⊂ E ; B ⊃
.BEGIN CENTER
IDIOLECTIC LANGUAGE-ANALYSIS FOR
UNDERSTANDING DOCTOR-PATIENT DIALOGUES*
Horace Enea and Kenneth Mark Colby
Department of Computer Science
Stanford University
Stanford, California
.END
.PREFACE 1
.INDENT 4
.GROUP SKIP 20
.BEGIN PREFACE 0
.ONCE FLUSH LEFT
---------------------------------------------------------
.ONCE INDENT 0 ;
* This research is supported by Grant PHS MH 06645-12 from the
National Institute of Mental Health, by (in part) Research Scientist
Award (No. 1-K05-K14,433) from the National Institute of Mental
Health to the second author and (in part) by the Advanced Research
Projects Agency of the Office of the Secretary of Defense (SD-183).
.ONCE FLUSH LEFT
---------------------------------------------------------
.ONCE INDENT 0 ;
.END ;
.SEC ABSTRACT
A programming language is described which is designed to simplify
the construction of computer programs to analyze English. This
system attempts to merge the best features of pattern matchers and
the phrase structure approach to language analysis. Several
practical problems which occur in dealing with such a system are
described.
.SEC INTRODUCTION
Why is it so difficult for machines to understand natural
language? Perhaps it is because machines do not simulate
sufficiently what humans do when humans process language. Several
years of experience with computer science and linguistic approaches
have taught us the scope and limitations of syntactic and semantic
parsers (Schank, Tesler and Weber,%28%* Simmons,%29%* Winograd,%213%*
Woods%214%*). While extant linguistic parsers perform
satisfactorily with carefully edited text sentences or with small
dictionaries, they are unable to deal with everyday language
behavior characteristic of human conversation. In a rationalistic
quest for certainty and attracted by an analogy from the proof
theory of logicians in which provability implied computability,
computational linguists hoped to develop formalisms for natural
language grammars. But the hope has not been realized and perhaps in
principle cannot be. (It is difficult to formalize something which
can hardly be formulated).
Linguistic parsers use morphographemic analyses,
parts-of-speech assignments and dictionaries containing multiple
word-senses each possessing semantic features, programs or rules for
restricting word combinations. Such parsers perform a detailed
analysis of every word, valiantly disambiguating at each step in an
attempt to construct a meaningful interpretation. While it may be
sophisticated computationally, a conventional parser is quite at a
loss to deal with the caprice of ordinary conversation. In everyday
discourse people speak colloquially and idiomatically using all
sorts of pat phrases, slang and cliches. The number of
special-case expressions is indefinitely large. Humans are cryptic
and elliptic. They lard even their written expressions with
meaningless fillers and fragments. They convey their intentions and
ideas in idiosyncratic and metaphorical ways, blithely violating
rules of 'correct' grammar and syntax. Given these difficulties,
how is it that people carry on conversations easily most of the time
while machines thus far have found it extremely difficult to
continue to make appropriate replies indicating some degree of
understanding?
It seems that people `get the message' without always
analyzing every single word in the input. They even ignore some of
its terms. People make individualistic and idiosyncratic selections
from highly redundant and repetitious communications. These
personal selective operations, based on idiosyncratic intentions,
produce a transformation of the input by destroying and even
distorting information. In speed reading, for example, only a small
percentage of contentive words on each page need be looked at.
These words somehow resonate with the reader's relevant
conceptual-belief structure whose processes enable him to
`understand' not simply the language but all sorts of unmentioned
aspects about the situations and events being referred to in the
language. Normal written English text is estimated to be 5/6
redundant (Rubenstein and Haberstroh%27%*). Spoken conversations in
English are probably better than 50α% redundant (Carroll%21%*). Words
can be garbled and listeners nonetheless get the gist or drift of
what is being said. They see the "pattern" and thus can supply much
of what is missing.
To approximate such human achievements we require a new
perspective and a practical method which differs from that of
current linguistic approaches. This alternate approach should
incorporate those aspects of parsers which have been found to work
well, e.g., detecting embedded clauses. Also individualistic features
characteristic of an idiolect should have dominant emphasis. Parsers
represent complex and refined algorithms. While on one hand they
subject a sentence to a detailed and sometimes overkilling analysis,
on the other, they are finicky and oversensitive. A conventional
parser may simply halt if a word in the input sentence is not
present in its dictionary. It finds ungrammatical expressions such
as double prepositions (`Do you want to get out of from the
hospital?') quite confusing. Parsers constitute a tight conjunction
of tests rather than a loose disjunction. As more and more tests
are added to the conjunction, the parser behaves like a finer and
finer filter which makes it increasingly difficult for an expression
to pass through.
Parsers do not allow for the exclusions typical of everyday
human dialogues.
Finally, it is difficult to keep consistent a dictionary of
over 500 multiple-sense words classified by binary semantic features
or rules. For example, suppose a noun (Ni) is used by some verbs as
a direct object in the semantic sense of a physical object. Then it
is noticed that Ni is also used by other verbs in the sense of a
location so `location' is added to Ni's list of semantic features.
Now Ni suddenly qualifies as a direct object for a lot of other
verbs. But some of the resultant combinations make no sense even in
an idiolect. If a special feature is then created for Ni, one
loses the power of general classes of semantic features. Adding a
single semantic feature can result in the propagation of hidden
inconsistencies and unwanted side-effects. As the dictionary grows
it becomes increasingly unstable and difficult to control.
Early attempts to develop a pattern-matching approach using
special-purpose heuristics have been described by Colby, Watt and
Gilbert,%22%* Weizenbaum%211%* and Colby and Enea.%23%* The
limitations of these attempts are well known to workers in
artificial intelligence. The man-machine conversations of such
programs soon become impoverished and boring. Such primitive
context-restricted programs
often grasp a topic well enough but too often do not understand quite
what is being said about the topic, with amusing or disastrous
consequences. This shortcoming is a consequence of the limitations
of a pattern-matching approach lacking a rich conceptual structure
into which the pattern abstracted from the input can be matched for
inferencing. These programs also lack a subroutine structure, both
pattern directed and specific, desirable for generalizations and
further analysis.
The strength of these pattern matching approaches lies in
their ability to ignore some of the input. They look for patterns,
which means the emphasis of some detail to the exclusion of other
detail.
Thus they can get something out of nearly every sentence-- sometimes
more, sometimes less.
An interesting pattern-matching approach for machine translation
has been developed by Wilks.%212%* His program constructs a pattern
from English text input which is matched against templates in an
interlingual data base from which, in turn, French output is
generated without using a generative grammar.
In the course of constructing an interactive simulation of
paranoia we were faced with the problem of dealing with unedited and
unrestricted natural language as it is used in the doctor-patient
situation of a psychiatric interview (Colby, Hilf, Weber, and
Kraemer,%24%* Colby and Hilf%25%*). This domain of discourse
admittedly contains many psychiatrically stereotyped expressions and
is constrained in topics (Newton's laws are rarely discussed). But
it is rich enough in verbal behavior to be a challenge to a language
understanding algorithm since a variety of human experiences are
discussed in this domain, including the interpersonal relation which develops
between the interview participants. A look at the contents of a
thesaurus reveals that words relating to people and their
interrelations make up roughly 70α% of English words.
The diagnosis of paranoia is made by psychiatrists relying
mainly on the verbal behavior of the interviewed patient. If a
paranoid model is to exhibit paranoid behavior in a psychiatric
interview, it must be capable of handling dialogues typical of the
doctor-patient context. Since the model can communicate only
through teletyped messages, the vis-a-vis aspects of the usual
psychiatric interview are absent. Therefore the model must be able
to deal with unedited typewritten natural language input and to
output replies which are indicative of an underlying paranoid
thought process during the episode of a psychiatric interview.
In an interview there is always a who saying something to a
whom with definite intentions and expectations. There are two
situations to be taken into account, the one being talked about and
the one the participants are in. Sometimes the latter becomes the
former. Participants in dialogues have intentions and dialogue
algorithms must take this into account. The doctor's intention is
to gather certain kinds of information while the patient's intention
is to give information and get help. A job is to be done; it is not
small talk. Our working hypothesis is that each participant in the
dialogue understands the other by matching selected
idiosyncratically-significant patterns in the input against
conceptual patterns which contain information about the situation or
event being described linguistically. This understanding is
communicated reciprocally by linguistic responses judged appropriate
to the intentions and expectations of the participants and to the
requirements of the situation. In this paper we shall describe only
our current input-analyzing processes used to extract a
pattern from natural language input. In a later communication we
shall describe the inferential processes carried out at the
conceptual level once a pattern has been received by memory from the
input-analyzing processes.
Studies of our 1971 model of paranoia (PARRY) indicated that
about thirty percent of the sentences were not understood at all,
that is, no concept in the sentence was recognized. In a somewhat
larger number of cases some concepts, but not all, were recognized.
In many cases these partially recognized sentences led to a partial
understanding that was sufficient to gather the intention of the
speaker and thus to output an appropriate response. However,
misunderstandings occurred too often. For example:
.B
DOCTOR: How old is your mother ?
PARRY: Twenty-eight
.EC
PARRY has interpreted the question as referring to his own age and
answered by giving his age. The purpose of our new language
analysis system is to significantly raise the level of understanding
by preventing such misunderstandings while not restricting what can
be said to PARRY. We do not expect complete understanding from
this system -- even native speakers of the language do not
completely understand the language.
By `understanding' we mean the system can do some or all of
the following:
.BEGIN INDENT 10,10,10
1) Determine the intention of the interviewer in making a particular
utterance.
2) Make common logical deductions that follow from the interviewer's
utterance.
3) Form an idiolectic internal representation of the utterance so
that questions may be answered, commands carried
out, or data added to memory.
4) Determine references for pronouns, and other anaphora.
5) Deduce the tone of the utterance, i.e., hostile,
insulting...
6) Classify the input as a question, rejoinder, command, ...
.END
The approach we are taking consists of merging the best
features of pattern-directed systems such as the MAD DOCTOR,%22%*
ELIZA,%211%* and parsing-directed systems, for example,
Winograd%213%* and Woods.%214%* We merge the BNF phrase-structure
approach to analyzing English with the pattern-matching approach,
with its attendant emphasis of some concepts to the exclusion of
others. The programs to accomplish this are written in MLISP2, an
extensible version of the programming language MLISP,%26,10%* and
use an interpreted version of the pattern matcher designed for a
new programming language, LISP70.
The following is a basic description of the pattern matcher.
We shall illustrate its operation using examples of problems common
to teletyped psychiatric dialogues.
.SEC PATTERN MATCHING
Pattern directed computation involves two kinds of operations
on data structures: decomposition and recomposition. Decomposition
breaks down an input stream into components under the direction of a
decomposition pattern ("dec"). The inverse operation,
recomposition, constructs an output stream under the direction of a
recomposition pattern ("rec").
.BEGIN GROUP
A rewrite rule is of the form:
.B
dec → rec
.EC
.EC
It defines a partial function on streams as follows: if the input
stream matches the dec, then the output stream is generated by the
rec. The following rule (given as an example only) could be part of
a question answering function:
.B
How are you ? → Very well and you ?
.EC
If the input stream consists of the four tokens:
.B
How are you ?
.EC
the output stream will consist of the five tokens:
.B
Very well and you ?
.E
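In modern terms, each rewrite rule defines a partial function on token streams. The following Python sketch is an illustration only (the actual system was written in MLISP2 using the LISP70 pattern matcher) and shows the behavior of the literal rule above:

```python
# Sketch of a literal rewrite rule as a partial function on token
# streams: if the input stream matches the dec exactly, the rec
# generates the output stream; otherwise the rule does not apply.

def make_rule(dec, rec):
    """Return the partial function on token streams defined by dec -> rec."""
    def apply_rule(tokens):
        if tokens == dec:
            return list(rec)    # match: emit the recomposition
        return None             # no match: the function is undefined here
    return apply_rule

rule = make_rule(["How", "are", "you", "?"],
                 ["Very", "well", "and", "you", "?"])
print(rule(["How", "are", "you", "?"]))
# ['Very', 'well', 'and', 'you', '?']
```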
.SS REWRITE FUNCTIONS
A rewrite rule defines a partial function, for example, the
mapping of some particular token into some other particular token.
A broader partial function can be defined as the union of several
rewrite rules. A rewrite function definition is of the form:
.B
RULES OF <name> =
dec1 → rec1,
dec2 → rec2,
...
decn → recn;
.E
.SS VARIABLES
A function is difficult to define if every case must be
enumerated. Therefore, rewrite rules allow variables to appear in
patterns. The value of a variable can be either a list or an atom.
In this paper the notation:
.B
:X
.EC
where X is any identifier, will denote the variable X. The
variables of each rule are distinct from the variables of all
other rules, even if their names are the same.
The following definition has only three rewrite rules,
but handles an unlimited number of input streams:
.B
RULES OF REPLY=
HOW ARE YOU '? → VERY WELL '?,
HOW IS :X → I HAVEN''T SEEN :X ',
LATELY'.,
DID :X GO TO :Y '? →
WHY DON''T YOU ASK :X YOURSELF'?;
.EC
A variable can appear more than once in a single dec pattern, but
it must match identical items at each appearance. Example:
.B
RULES OF EQUAL =
(EQUAL :X :X) → TRUE;
.E
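As a sketch of how variable matching can be realized (Python is used here purely for illustration; in this simplified rendering a ":X" variable binds a single item, and a repeated variable must rebind the identical item, as in the EQUAL example above):

```python
# Sketch of dec matching with variables. A token beginning with ":"
# is a variable binding one item; a variable appearing twice must
# match identical items at each appearance.

def match(dec, tokens):
    if len(dec) != len(tokens):      # each variable binds one item here
        return None
    bindings = {}
    for d, t in zip(dec, tokens):
        if isinstance(d, str) and d.startswith(":"):
            if d in bindings and bindings[d] != t:
                return None          # repeated variable, different item
            bindings[d] = t
        elif d != t:
            return None              # literal mismatch
    return bindings

print(match(["EQUAL", ":X", ":X"], ["EQUAL", "A", "A"]))  # {':X': 'A'}
print(match(["EQUAL", ":X", ":X"], ["EQUAL", "A", "B"]))  # None
```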
.SS ELLIPSIS
To make patterns easier to read and write, the ellipsis
symbol ... can be used to stand for an unnamed variable. Thus:
.B
IS ... COMING → NO, ... COULD NOT MAKE IT.
.EC
If an ellipsis (...) occurs several times on a side, it designates a
different variable each time. The n'th ellipsis in a dec designates
the same variable as the n'th ellipsis in the rec.
Ellipsis is one of the principal ideas of pattern matching.
It permits imprecise matching; that is, the emphasis or ignoring of
items.
.SS AUTOMATIC ORDERING OF RULES
The order of rules in a function definition does not specify
the order in which the system will attempt to apply them. This
ordering operation is handled by a special system ordering function.
Consider the rewrite function:
.B
RULES OF REPLY =
I SEE :X → SO WHAT '?,
I SEE ANN → WOW '!;
.EC
Both rules would match:
.B
I SEE ANN
.EC
In such cases the more specific rule takes precedence. Thus, given:
.B
I SEE ANN
.EC
as the input stream, the output stream would be:
.B
WOW !
.EC
but given:
.B
I SEE STARS
.EC
the output stream would be:
.B
SO WHAT ?
.EC
A literal is more specific than a variable. A variable appearing for
the second time is more specific than a variable appearing for the
first time in a dec. This is so because the second occurrence of the
variable must match the same pattern as the first occurrence. The
precedence function is itself written in rewrites and so is both
extendable and changeable by the user. Currently precedence is
calculated by a left to right application of the above criteria.
Therefore, the following function defines the LISP function EQUAL:
.B
RULES OF EQUAL =
(EQUAL :X :X) → T,
(EQUAL :X :Y) → NIL;
.E
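The precedence criteria just described can be sketched as a left-to-right specificity score (an illustrative Python rendering; the numeric scores and the rule representation are our assumptions, not the system's actual mechanism):

```python
# Sketch of automatic rule ordering: score each position of a dec
# (literal = 2, repeated variable = 1, fresh variable = 0) and try
# rules in descending lexicographic order of their scores.

def specificity(dec):
    seen, score = set(), []
    for item in dec:
        if isinstance(item, str) and item.startswith(":"):
            score.append(1 if item in seen else 0)
            seen.add(item)
        else:
            score.append(2)              # a literal is most specific
    return score

rules = [("I SEE :X".split(), "SO WHAT ?"),
         ("I SEE ANN".split(), "WOW !")]
# Most specific rule first, comparing positions left to right.
rules.sort(key=lambda r: specificity(r[0]), reverse=True)
print(rules[0][1])  # WOW !  -- the all-literal rule takes precedence
```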
.SS SEGMENTS
Sometimes it is desirable for a variable to match an
indeterminate number of items. This is notated:
.B
::X
.EC
Use of the double-colon ("::") means that the variable (e.g., X)
will match zero or more items. Example:
.B
RULES OF APPEND=
(APPEND (::X)(::Y)) → (::X ::Y);
.EC
Thus, if the input stream were:
.B
(APPEND (A B) (C D E))
.EC
the output stream would be:
.B
(A B C D E)
.EC
For increased readability the rule could also be written:
.B
RULES OF APPEND =
(APPEND (...) (...)) → (... ...);
.EC
Another example:
.B
RULES OF REPLY =
WHERE DID ::X GO →
::X WENT HOME '.;
.EC
Therefore,
.B
WHERE DID THE CARPENTER GO →
THE CARPENTER WENT HOME.
.E
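Segment matching can be sketched as follows (Python for illustration, on flat token lists; a "::" variable tries successively longer segments until the rest of the pattern matches):

```python
# Sketch of segment variables: "::X" matches zero or more items.
# Shown on flat token lists; nested lists work the same way.

def match(dec, toks, b=None):
    b = dict(b or {})
    if not dec:
        return b if not toks else None
    head, rest = dec[0], dec[1:]
    if isinstance(head, str) and head.startswith("::"):
        # try binding the segment to every prefix, shortest first
        for i in range(len(toks) + 1):
            seg = toks[:i]
            if head in b and b[head] != seg:
                continue                 # repeated segment must agree
            out = match(rest, toks[i:], {**b, head: seg})
            if out is not None:
                return out
        return None
    if toks and head == toks[0]:
        return match(rest, toks[1:], b)  # literal matches one token
    return None

b = match(["WHERE", "DID", "::X", "GO"],
          ["WHERE", "DID", "THE", "CARPENTER", "GO"])
print(b["::X"] + ["WENT", "HOME"])
# ['THE', 'CARPENTER', 'WENT', 'HOME']
```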
.SS APPLICATION
One of the main deficiencies of the system in which the MAD
DOCTOR was programmed was its lack of adequate subroutining
capability. Subroutines may be indicated in the rewrite system as
follows:
.B
RULES OF LAST =
() → (),
(:X) → :X,
(:X ...) → <LAST (...)>;
.EC
The "<>" surrounding a pattern means that the current input stream
is to be pushed down, that the function indicated by the first token
within the brackets is to be entered with the rest of the pattern
appended to the front of the input stream, and that the output
stream is to be placed into the restored current input stream. Note
that MLISP2 functions may be called as well as rewrite functions.
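The recursive structure of LAST is that of an ordinary recursive function; a Python rendering of the three rules (illustrative only) is:

```python
# Sketch of the LAST rewrite function above: the "<LAST (...)>"
# subroutine call in the third rule becomes a recursive call.

def last(items):
    if not items:
        return []                # ()       -> ()
    if len(items) == 1:
        return items[0]          # (:X)     -> :X
    return last(items[1:])       # (:X ...) -> <LAST (...)>

print(last(["A", "B", "C"]))  # C
```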
.SS GOALS
To gain the advantage of goal directed pattern matching and
computing, as well as the full power of context sensitive grammars,
the following form may be used:
.B
RULES OF PREPOSITIONAL_PHRASE =
<PREPOSITION>:P <NOUN_PHRASE>:N
→ (PREP_PH :P :N);
.EC
The identifier between the angled brackets ("<>") names a rewrite
function the rules of which are to be matched against the input
stream. When a match occurs the output stream of the goal will be
bound to the associated variable. Example:
.B
RULES OF PREPOSITIONAL_PHRASE =
<PREPOSITION>:P <NOUN_PHRASE>:N
→ (PREP_PH :P :N);
RULES OF NOUN_PHRASE =
TOWN → (NOUN_PH TOWN),
PALO ALTO → (NOUN_PH PALO_ALTO);
RULES OF PREPOSITION =
IN → IN,
ON → ON;
.EC
and the input stream:
.B
IN PALO ALTO
.EC
the output stream would be:
.B
(PREP_PH IN (NOUN_PH PALO_ALTO))
.E
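A sketch of this goal mechanism in Python (the grammar and output forms follow the example above; the control structure shown is our simplification of the matcher):

```python
# Sketch of goal-directed matching: "<NAME>:V" invokes another
# rewrite function on the input stream and binds its output stream
# to the variable, leaving the unconsumed tokens for the next goal.

GRAMMAR = {
    "PREPOSITION": [(["IN"], "IN"), (["ON"], "ON")],
    "NOUN_PHRASE": [(["TOWN"], ("NOUN_PH", "TOWN")),
                    (["PALO", "ALTO"], ("NOUN_PH", "PALO_ALTO"))],
}

def goal(name, toks):
    """Try each rule of the named function; return (output, rest-of-input)."""
    for dec, rec in GRAMMAR[name]:
        if toks[:len(dec)] == dec:
            return rec, toks[len(dec):]
    return None, toks

def prepositional_phrase(toks):
    p, rest = goal("PREPOSITION", toks)
    n, rest = goal("NOUN_PHRASE", rest)
    if p and n:
        return ("PREP_PH", p, n)

print(prepositional_phrase(["IN", "PALO", "ALTO"]))
# ('PREP_PH', 'IN', ('NOUN_PH', 'PALO_ALTO'))
```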
.SS OPTIONALS
Many other shorthands exist to simplify writing rules. One
useful feature that will be mentioned here is the optional.
.B
RULES OF AUXILIARY_PHRASE =
<AUXILIARY>:A [<NEGATIVE>:N]:N1 →
(AUX_PH :A [:N]:N1 );
.EC
If the optional pattern, enclosed in square brackets ("[]"), occurs
in the input stream it will be bound to :N. :N1 will be bound to 2.
If the <NEGATIVE> does not occur, :N1 will be bound to 1. On the rec
side of the rules if :N1 is 2 then :N will be placed in the output
stream. If it is 1 then nothing is placed in the output stream at
that point. For example, given the rule above:
.B
DO → (AUX_PH DO)
DO NOT → (AUX_PH DO NOT)
.E
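The effect of the optional can be sketched as follows (Python for illustration; the list of auxiliary words is an assumption made for the example):

```python
# Sketch of the optional ("[]") pattern: the auxiliary is required,
# the negative is optional, mirroring the AUX_PH rule above.

def auxiliary_phrase(tokens):
    if not tokens or tokens[0] not in ("DO", "CAN", "WILL"):
        return None                  # required <AUXILIARY> absent
    out = ["AUX_PH", tokens[0]]
    if len(tokens) > 1 and tokens[1] == "NOT":
        out.append("NOT")            # optional <NEGATIVE> matched
    return tuple(out)

print(auxiliary_phrase(["DO"]))          # ('AUX_PH', 'DO')
print(auxiliary_phrase(["DO", "NOT"]))   # ('AUX_PH', 'DO', 'NOT')
```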
.SS MORE EXAMPLES
We have collected a large number of dialogues using our
previous program PARRY. These dialogues form a large body of
examples of the kind of English which we can expect. Martin Frost, a
graduate student in Computer Science, Stanford University, has
written a keyword in context program which enables us to isolate
examples centered on particular words so that uses of those words
in context become more apparent. Our general approach is to build a
system which can produce desired interpretations from these examples
and to incrementally add to the rules in the system as new cases are
discovered during the running of the program.
Following are some examples of commonly occurring situations and
examples of the kind of rules we use to handle them.
.SS QUESTION INTRODUCER
In doctor-patient dialogues it is quite common to introduce
a question by the use of a command. The "question introducer" is
followed by either a <NOUN_PHRASE> or a <DECLARATIVE_SENTENCE>. For
example,
.B
COULD YOU TELL ME YOUR NAME?
.EC
Rather than attempt a literal analysis of this question, which might
lead to the interpretation:
.B
DO YOU HAVE THE ABILITY TO SPEAK YOUR NAME TO ME?
.EC
we utilize rules like:
.B
RULES OF SENTENCE =
<QUESTION_INTRODUCER>:Q <NOUN_PHRASE>:N
→ (IS :N '*'?'* );
RULES OF QUESTION_INTRODUCER =
COULD YOU TELL ME → ,
WOULD YOU TELL ME → ,
PLEASE TELL ME → ;
.E
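A sketch of how the introducer rules rewrite such a question (Python for illustration; whole-string handling here stands in for the token-level matching the system actually performs):

```python
# Sketch of the question-introducer rewrite: a known introducer
# followed by a noun phrase becomes an IS-question, as in the
# SENTENCE rules above. The noun-phrase "recognizer" is a stand-in.

INTRODUCERS = ["COULD YOU TELL ME", "WOULD YOU TELL ME", "PLEASE TELL ME"]

def sentence(text):
    up = text.upper().rstrip("?").strip()
    for intro in INTRODUCERS:
        if up.startswith(intro + " "):
            noun_phrase = up[len(intro):].strip()
            return ("IS", noun_phrase, "?")   # e.g. (IS YOUR NAME ?)
    return None

print(sentence("Could you tell me your name?"))
# ('IS', 'YOUR NAME', '?')
```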
Although it is conceivable that there are an infinite number
of ways to introduce a question in this manner, we have found only
about six literal strings are actually used in our data base of
dialogues. When we discover a new string we incrementally add a
rule. When we have enough examples to detect a more general form
we replace the rules for <QUESTION_INTRODUCER> by a more elegant and
general formulation. This approach allows us to process dialogues
before we have a complete analysis of all possible sentence
constructions, and it allows us to build a language analyzer based
on actually occurring forms.
Notice that it is possible to make more than one analysis of
any given sentence depending on what is being looked for. A poet
might be interested in the number of syllables per word and the
patterns of stress. A "full" analysis of English must allow for
this possibility, but it is clearly foolish to produce this kind of
analysis for PARRY. Our analysis will be partial and idiosyncratic
to the needs of our program. This is what is meant by idiolectic.
.SS FILLERS
It is quite common for interviewers to introduce words of
little significance to PARRY into the sentence. For example:
.B
WELL, WHAT IS YOUR NAME?
.EC
The "well" in this sentence serves no purpose in PARRY's analysis,
although it might to a linguist interested in hesitation phenomena.
These fillers can be ignored. The following rules accomplish this:
.B
RULES OF SENTENCE =
<FILLERS>:F <SENTENCE>:S → :S;
RULES OF FILLERS =
WELL → ,
OK → ;
.E
.SS PUNCTUATION
Interviewers use little intra-sentence punctuation in
talking to PARRY. When it is used it is often to separate phrases
that might otherwise be ambiguous. Example:
.B
WHY WEREN'T YOU VERY CLOSE, FRANK
.EC
Here the comma clearly puts "CLOSE" in a different phrase from
"FRANK". Punctuation, when used in PARRY's rules, is generally
enclosed in optional brackets ("[]"). This has the effect of
separating phrases when punctuation is used, but not requiring full
punctuation for the system to work. Example:
.B
RULES OF SENTENCE =
<SENTENCE>:S1 [',]:C <SENTENCE_CONNECTOR>:SC
<SENTENCE>:S2
→ (CONJUNCTION :SC :S1 :S2);
.E
.SS CLICHES AND IDIOMS
The English we encounter in doctor-patient dialogues is made
up of a great number of cliches and idioms; we therefore anticipate
a large number of rules devoted to them. For example:
.B
RULES OF TIME_PHRASES =
A COUPLE OF <TIME_UNIT>:T AGO
→ (TIME (RELATIVE PAST)(REF PRESENT) :T);
RULES OF TIME_UNIT =
SECONDS → (WITHIN CONVERSATION),
MOMENTS → (WITHIN CONVERSATION),
DAYS → (BEFORE CONVERSATION DAYS);
.E
.SS REPRESENTATION CORRECTION
Intermediate results are often produced which are misleading
in meaning or are in the wrong form for further processing. We,
therefore, incorporate at various points rules which detect certain
undesired intermediate results and convert them to the desired form.
Example:
.B
RULES OF CORRECT_FORM =
(QUESTION ... (SENTENCE ...)) →
(QUESTION ... ...);
.E
.SS UNKNOWN WORDS
Rules can be derived to handle words which were previously
unknown to the system. For example:
.B
RULES OF UNKNOWN_WORD =
DR'. :X → <NEW_WORD NAME :X>,
THE :X <VERB_PHRASE>:V →
<NEW_WORD NOUN :X>,
I :X YOU → <NEW_WORD VERB :X>;
.EC
Here "NEW_WORD" is a function which adds new words to the
dictionary.
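These rules can be sketched as context tests that classify a previously unknown word (Python for illustration; the "DR." token and the simplified stand-in for the <VERB_PHRASE> test are assumptions of the sketch):

```python
# Sketch of the unknown-word rules above: the surrounding context
# suggests a dictionary class for a word not yet known.

dictionary = {}

def classify_unknown(tokens):
    if tokens[0] == "DR." and len(tokens) > 1:
        dictionary[tokens[1]] = "NAME"     # DR'. :X          -> a name
    elif tokens[0] == "I" and len(tokens) == 3 and tokens[2] == "YOU":
        dictionary[tokens[1]] = "VERB"     # I :X YOU         -> a verb
    elif tokens[0] == "THE" and len(tokens) > 2:
        dictionary[tokens[1]] = "NOUN"     # THE :X <VERB_PHRASE> -> a noun

classify_unknown(["I", "FLURB", "YOU"])
classify_unknown(["DR.", "JONES"])
print(dictionary)  # {'FLURB': 'VERB', 'JONES': 'NAME'}
```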
.SS CONCLUSION
We are faced with the problems of natural language as it is
used to interview people in a doctor-patient context. We have
developed a language processing system which we believe is capable
of performing in these interviews at a significantly improved level
of performance compared to systems used in the past. We have
developed techniques which can measure performance in comparison
with the ideal of a real human patient in the same context.%24,5,7%*
We are designing our system with the realization that a long period
of development is necessary to reach desired levels of performance.
This is a system that can work at a measured level of performance
and be improved over time with new rules having minimum interaction
with those already existing. Our system is designed so
that a complete analysis of every word or phrase of an utterance is
not necessary.
The basis of this system is a rewrite interpreter which will
automatically merge new rules into the set of already existing rules
so that the system will continue to handle sentences which it
handled in the past.
.SEC REFERENCES
.INDENT 0, 3
%21%*Carroll, J.B. Language and Thought. Prentice-Hall, Englewood Cliffs,
New Jersey, p. 59.
%22%*Colby, K.M., Watt, J. and Gilbert, J.P. A computer method of
psychotherapy. Journal of Nervous and Mental Disease,
142, 148-152, 1966.
%23%*Colby, K.M. and Enea, H. Heuristic methods for computer understanding
of natural language in context restricted on-line dialogues.
Mathematical Biosciences, 1, 1-25, 1967.
%24%*Colby, K.M., Hilf, F.D., Weber, S., and Kraemer, H. Turing-like
indistinguishability tests for the validation of a computer
simulation of paranoid processes. Artificial
Intelligence, 3, 199-221, 1972.
%25%*Colby, K.M. and Hilf, F.D. Multidimensional analysis in
evaluating the adequacy of a simulation of paranoid processes.
Memo AIM-194. Stanford Artificial Intelligence Project, Stanford
University.
%26%*Enea, H. MLISP, Technical report no. CS-92, 1968,
Computer Science Department, Stanford University.
%27%*Rubenstein, A.H. and Haberstroh, C.J., Some Theories of
Organization, Dorsey Press, Homewood, Ill., 1960, p. 232.
%28%*Schank, R.C., Tesler, L. and Weber, S. Spinoza II: Conceptual
case-based natural language analysis. Memo AIM-109, 1970, Stanford
Artificial Intelligence Project, Stanford University.
%29%*Simmons, R.F. Some semantic structures for representing English
meanings. Preprint, 1970, Computer Science Department, University
of Texas, Austin.
%210%*Smith, D.C., MLISP, Memo AIM-135, 1970, Stanford Artificial
Intelligence Project, Stanford University.
%211%*Weizenbaum, J. ELIZA -- a computer program for the study of natural
communication between man and machine. Communications of the ACM,
9, 36-45, 1966.
%212%*Wilks, Y.A. Understanding without proofs. (See this volume).
%213%*Winograd, T. A program for understanding natural language.
Cognitive Psychology, 3, 1-191, 1972.
%214%*Woods, W.A. Transition network grammars for natural language analysis.
Communications of the ACM, 13, 591-606, 1970.